
    A Neural Spiking Approach Compared to Deep Feedforward Networks on Stepwise Pixel Erasement

    In real-world scenarios, objects are often partially occluded, so object recognition must be robust against such perturbations. Convolutional networks have shown good performance in classification tasks, and their learned convolutional filters resemble the receptive fields of simple cells in the primary visual cortex. Spiking neural networks, by contrast, are more biologically plausible. We developed a two-layer spiking network, trained on natural scenes with a biologically plausible learning rule, and compared it to two deep convolutional neural networks on a classification task of stepwise pixel erasement on MNIST. In comparison to these networks, the spiking approach achieves good accuracy and robustness.

    Comment: Published in ICANN 2018: Artificial Neural Networks and Machine Learning (https://link.springer.com/chapter/10.1007/978-3-030-01418-6_25). The final authenticated publication is available online at https://doi.org/10.1007/978-3-030-01418-6_25.
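
    A minimal sketch of how such a stepwise erasement protocol can be run; the classifier interface (a `predict` method) and the erasement fractions are illustrative assumptions, not the paper's exact setup:

        import numpy as np

        def erase_pixels(images, fraction, rng):
            # Zero out a random `fraction` of the pixels in each flattened image.
            erased = images.copy()
            n_pixels = images.shape[1]
            n_erase = int(fraction * n_pixels)
            for img in erased:
                idx = rng.choice(n_pixels, size=n_erase, replace=False)
                img[idx] = 0.0
            return erased

        def robustness_curve(model, images, labels, fractions, seed=0):
            # Accuracy of `model` as an increasing share of pixels is erased.
            rng = np.random.default_rng(seed)
            return [np.mean(model.predict(erase_pixels(images, f, rng)) == labels)
                    for f in fractions]

        # e.g. robustness_curve(clf, x_test.reshape(-1, 784), y_test,
        #                       fractions=[0.0, 0.25, 0.5, 0.75])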

    Towards democratizing and automating online conferences: lessons from the neuromatch conferences

    Legacy conferences are costly and time-consuming, and they exclude scientists lacking various resources or abilities. During the 2020 pandemic, we created an online conference platform, Neuromatch Conferences (NMC), aimed at developing technological and cultural changes to make conferences more democratic, scalable, and accessible. We discuss the lessons we learned.

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturation of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
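
    The value of a simulator-independent description is that the same experiment script can target a software simulator or, in principle, the neuromorphic device by changing only the backend import. A minimal PyNN sketch (here using the NEST backend; population sizes and parameters are illustrative, and the name of the hardware backend module depends on the installed stack):

        import pyNN.nest as sim   # swap this import to retarget another backend

        sim.setup(timestep=0.1)   # ms

        # Poisson stimulus driving a small population of conductance-based neurons
        stim = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
        cells = sim.Population(100, sim.IF_cond_exp())
        sim.Projection(stim, cells, sim.OneToOneConnector(),
                       sim.StaticSynapse(weight=0.01),
                       receptor_type="excitatory")

        cells.record("spikes")
        sim.run(1000.0)            # ms
        spikes = cells.get_data()  # Neo Block holding the recorded spike trains
        sim.end()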

    Gain control network conditions in early sensory coding

    Gain control is essential for the proper function of any sensory system, yet the precise mechanisms by which the brain achieves effective gain control are unknown. Based on our understanding of the existence and strength of connections in the insect olfactory system, we analyze the conditions that lead to controlled gain in a randomly connected network of excitatory and inhibitory neurons. We consider two scenarios for the variation of input into the system. In the first case, the intensity of the sensory input controls the input currents to a fixed proportion of neurons in the excitatory and inhibitory populations. In the second case, increasing stimulus intensity both recruits an increasing number of neurons that receive input and changes the input current that they receive. Using a mean-field approximation for the network activity, we derive relationships between the parameters of the network that ensure that the overall level of activity of the excitatory population remains unchanged as the intensity of the external stimulation increases. We find, first, that the main parameters regulating network gain are the probabilities of connections from the inhibitory population to the excitatory population and of connections within the inhibitory population. Second, we show that strict gain control is not achievable in a random network in the second case, when the input recruits an increasing number of neurons. Finally, we confirm that the gain control conditions derived from the mean-field approximation are valid in simulations of firing-rate models and Hodgkin-Huxley conductance-based models.
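
    The first input scenario can be probed with a small firing-rate simulation: a fixed subset of excitatory and inhibitory neurons receives a current that scales with stimulus intensity, and one checks whether the mean excitatory rate stays flat as intensity grows. The connection probabilities and all other constants below are illustrative assumptions, not the values derived in the paper:

        import numpy as np

        rng = np.random.default_rng(1)
        N_E, N_I = 400, 100
        # p_XY is the probability of a connection from population Y to X
        p_EE, p_EI, p_IE, p_II = 0.1, 0.3, 0.1, 0.3
        w = 0.5 / np.sqrt(N_E + N_I)

        W_EE = w * rng.binomial(1, p_EE, (N_E, N_E))
        W_EI = w * rng.binomial(1, p_EI, (N_E, N_I))
        W_IE = w * rng.binomial(1, p_IE, (N_I, N_E))
        W_II = w * rng.binomial(1, p_II, (N_I, N_I))

        # Scenario 1: a fixed subset of each population receives the input current
        driven_E = rng.random(N_E) < 0.5
        driven_I = rng.random(N_I) < 0.5

        def mean_excitatory_rate(intensity, dt=0.1, tau=10.0, steps=5000):
            # Rectified-linear rate dynamics, run to (approximate) steady state
            r_E, r_I = np.zeros(N_E), np.zeros(N_I)
            for _ in range(steps):
                in_E = W_EE @ r_E - W_EI @ r_I + intensity * driven_E
                in_I = W_IE @ r_E - W_II @ r_I + intensity * driven_I
                r_E += dt / tau * (-r_E + np.maximum(in_E, 0.0))
                r_I += dt / tau * (-r_I + np.maximum(in_I, 0.0))
            return r_E.mean()

        for s in (1.0, 2.0, 4.0, 8.0):
            print(f"intensity {s}: mean E rate {mean_excitatory_rate(s):.3f}")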

    A biophysical model of dynamic balancing of excitation and inhibition in fast oscillatory large-scale networks

    Over long timescales, neuronal dynamics can be robust to quite large perturbations, such as changes in white matter connectivity and grey matter structure through processes including learning, aging, development and certain disease processes. One possible explanation is that robust dynamics are facilitated by homeostatic mechanisms that can dynamically rebalance brain networks. In this study, we simulate a cortical brain network using the Wilson-Cowan neural mass model with conduction delays and noise, and use inhibitory synaptic plasticity (ISP) to dynamically achieve a spatially local balance between excitation and inhibition. Using MEG data from 55 subjects, we find that ISP enables us to simultaneously achieve high correlation with multiple measures of functional connectivity, including amplitude envelope correlation and phase locking. Further, we find that ISP successfully achieves local E/I balance and can consistently predict the functional connectivity computed from real MEG data for a much wider range of model parameters than is possible with a model without ISP.
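
    The local balancing idea can be sketched with a single Wilson-Cowan node: an inhibitory-plasticity step nudges the I-to-E weight whenever the excitatory rate deviates from a target, in the spirit of the rule used in the study. All constants below are illustrative assumptions:

        import numpy as np

        def S(x):
            # Sigmoid firing-rate function of the Wilson-Cowan model
            return 1.0 / (1.0 + np.exp(-x))

        def run_node(steps=300000, dt=0.1, tau=10.0,
                     c_ee=16.0, c_ei=12.0, c_ii=3.0, drive=1.0,
                     target=0.15, eta=5e-3):
            E, I, c_ie = 0.1, 0.1, 10.0   # c_ie: inhibitory weight onto E
            for _ in range(steps):
                dE = (-E + S(c_ee * E - c_ie * I + drive)) / tau
                dI = (-I + S(c_ei * E - c_ii * I)) / tau
                E += dt * dE
                I += dt * dI
                # ISP: strengthen inhibition when E runs above target, weaken below
                c_ie = max(c_ie + dt * eta * I * (E - target), 0.0)
            return E, c_ie

        E, c_ie = run_node()
        print(f"E rate {E:.3f} (target 0.15), learned c_ie {c_ie:.2f}")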

    From Model Specification to Simulation of Biologically Constrained Networks of Spiking Neurons.

    A declarative extensible markup language (SpineML) for describing the dynamics, networks and experiments of large-scale spiking neural network simulations is described, which builds upon the NineML standard. It utilises a level of abstraction that targets point-neuron representations but addresses the limitations of existing tools by allowing arbitrary dynamics to be expressed. The use of XML promotes model sharing, is human-readable and allows collaborative working. The syntax uses a high-level, self-explanatory format that allows straightforward code generation or translation of a model description into a native simulator format. This paper demonstrates the use of code generation to translate, simulate and reproduce the results of a benchmark model across a range of simulators. The flexibility of the SpineML syntax is highlighted by reproducing a pre-existing, biologically constrained model of a neural microcircuit (the striatum). The SpineML code is open source and is available at http://bimpa.group.shef.ac.uk/SpineML.
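
    The code-generation step can be illustrated in miniature: parse a declarative XML description of point-neuron dynamics and emit native simulator code. The tag names below are simplified stand-ins for illustration, not the actual SpineML schema:

        import xml.etree.ElementTree as ET

        # Hypothetical, SpineML-flavoured component description (not the real schema)
        SPEC = """
        <ComponentClass name="LIF">
          <Parameter name="tau_m"/>
          <Input name="I_syn"/>
          <TimeDerivative variable="V" expression="(I_syn - V) / tau_m"/>
        </ComponentClass>
        """

        def generate_update_fn(xml_text):
            # Translate the declarative dynamics into plain Python source code
            root = ET.fromstring(xml_text)
            derivs = root.findall("TimeDerivative")
            state = [d.get("variable") for d in derivs]
            inputs = [i.get("name") for i in root.findall("Input")]
            params = [p.get("name") for p in root.findall("Parameter")]
            args = ", ".join(state + inputs + params + ["dt"])
            lines = [f"def update_{root.get('name').lower()}({args}):"]
            for d in derivs:
                lines.append(f"    {d.get('variable')} += dt * ({d.get('expression')})")
            lines.append(f"    return {', '.join(state)}")
            return "\n".join(lines)

        print(generate_update_fn(SPEC))   # emits a runnable Euler update function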

    A cortical motor nucleus drives the basal ganglia-recipient thalamus in singing birds

    The pallido-recipient thalamus transmits information from the basal ganglia to the cortex and is critical for motor initiation and learning. Thalamic activity is strongly inhibited by pallidal inputs from the basal ganglia, but the role of nonpallidal inputs, such as excitatory inputs from cortex, remains unclear. We simultaneously recorded from presynaptic pallidal axon terminals and postsynaptic thalamocortical neurons in a basal ganglia–recipient thalamic nucleus that is necessary for vocal variability and learning in zebra finches. We found that song-locked rate modulations in the thalamus could not be explained by pallidal inputs alone and persisted following pallidal lesion. Instead, thalamic activity was likely driven by inputs from a motor cortical nucleus that is also necessary for singing. These findings suggest a role for cortical inputs to the pallido-recipient thalamus in driving premotor signals that are important for exploratory behavior and learning.

    Funding: National Institutes of Health (U.S.) (Grant R01DC009183); National Institutes of Health (U.S.) (Grant K99NS067062); Damon Runyon Cancer Research Foundation (Postdoctoral Fellowship); Charles A. King Trust (Postdoctoral Fellowship).

    Growth Rules for the Repair of Asynchronous Irregular Neuronal Networks after Peripheral Lesions

    © 2021 Sinha et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/).

    Several homeostatic mechanisms enable the brain to maintain desired levels of neuronal activity. One of these, homeostatic structural plasticity, has been reported to restore activity in networks disrupted by peripheral lesions by altering their neuronal connectivity. While multiple lesion experiments have studied the changes in neurite morphology that underlie modifications of synapses in these networks, the underlying mechanisms that drive these changes have yet to be explained. Evidence suggests that neuronal activity modulates neurite morphology and may stimulate neurites to selectively sprout or retract to restore network activity levels. To study the activity-dependent growth regimes of neurites, we developed a new spiking network model of peripheral lesioning that accurately reproduces the characteristics of network repair after deafferentation reported in experiments. To ensure that our simulations closely resemble the behaviour of networks in the brain, we model deafferentation in a biologically realistic balanced network model that exhibits low-frequency Asynchronous Irregular (AI) activity, as observed in the cerebral cortex. Our simulation results indicate that the re-establishment of activity in neurons both within and outside the deprived region, the Lesion Projection Zone (LPZ), requires opposite activity-dependent growth rules for excitatory and inhibitory post-synaptic elements. Analysis of these growth regimes indicates that they also contribute to the maintenance of activity levels in individual neurons. Furthermore, in our model, the directional formation of synapses observed in experiments requires that pre-synaptic excitatory and inhibitory elements also follow opposite growth rules. Lastly, we observe that both our proposed structural plasticity growth rules and the inhibitory synaptic plasticity mechanism that balances our AI network contribute to the restoration of the network to pre-deafferentation stable activity levels.
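
    The sign structure of the opposite growth rules can be summarised in a few lines; the linear form and the constants are illustrative assumptions, not the functions fitted in the paper:

        # nu is a growth rate; `activity` is the neuron's time-averaged activity
        def growth_exc_postsynaptic(activity, setpoint=1.0, nu=0.1):
            # Deprived neurons (activity < setpoint) grow excitatory input sites
            return nu * (setpoint - activity)

        def growth_inh_postsynaptic(activity, setpoint=1.0, nu=0.1):
            # Overactive neurons (activity > setpoint) grow inhibitory input sites
            return nu * (activity - setpoint)

        # A neuron inside the LPZ (low activity) sprouts excitatory and retracts
        # inhibitory post-synaptic elements, pulling its rate back toward the set
        # point; neurons outside the LPZ with elevated activity do the opposite.
        print(growth_exc_postsynaptic(0.2), growth_inh_postsynaptic(0.2))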

    Learning, Memory, and the Role of Neural Network Architecture

    The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and to produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
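
    The curvature analysis can be sketched numerically: estimate the Hessian of the training error at a parameter point by central finite differences and inspect its eigenvalue spectrum, where a wide spread of eigenvalues indicates a rough landscape. The tiny 1-4-1 network and the target function below are illustrative assumptions:

        import numpy as np

        def layered_net(params, x):
            # A 1-4-1 tanh network; params packs both weight layers and biases
            w1, b1, w2, b2 = params[:4], params[4:8], params[8:12], params[12]
            h = np.tanh(np.outer(x, w1) + b1)
            return h @ w2 + b2

        def mse(params, x, y):
            return np.mean((layered_net(params, x) - y) ** 2)

        def hessian(loss, theta, eps=1e-4):
            # Central finite-difference estimate of the Hessian at theta
            n = theta.size
            H = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    def f(di, dj):
                        q = theta.copy()
                        q[i] += di
                        q[j] += dj
                        return loss(q)
                    H[i, j] = (f(eps, eps) - f(eps, -eps)
                               - f(-eps, eps) + f(-eps, -eps)) / (4 * eps ** 2)
            return H

        rng = np.random.default_rng(0)
        x = np.linspace(-1.0, 1.0, 32)
        y = np.sin(np.pi * x)                  # illustrative target function
        theta = rng.normal(scale=0.5, size=13)
        eigs = np.linalg.eigvalsh(hessian(lambda p: mse(p, x, y), theta))
        print(f"curvature spectrum: min {eigs.min():.4f}, max {eigs.max():.4f}")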